321 research outputs found

    The Differences of Online Review Textual Content: A Cross-Cultural Empirical Study

    This research conducts a cross-cultural study examining the differences in online review textual content among China, America, and Australia. By segmenting the online review text, classifying words with a coding schema, and calculating the word proportion of each category, the study analyzes differences in online review textual content along three dimensions: textual type, content preference, and textual emotion. The research finds that cultural differences have a significant effect on online review textual type: Chinese customers prefer to describe objective facts, while American and Australian customers prefer to describe subjective feelings. For textual emotion, Chinese customers tend to express negative emotions, while American and Australian customers tend to express positive emotions. However, cultural differences show no significant effect on online review content preference.
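    The word-proportion step described above can be sketched as follows. The category word lists and the sample review are hypothetical illustrations, not the paper's actual coding schema or data.

```python
from collections import Counter

# Hypothetical coding schema: each category maps to a small word set.
# The study's real schema and text segmentation are not reproduced here.
SCHEMA = {
    "objective": {"room", "price", "location", "breakfast"},
    "subjective": {"love", "great", "awful", "disappointing"},
}

def category_proportions(tokens):
    """Return the share of tokens falling into each schema category."""
    counts = Counter()
    for tok in tokens:
        for cat, words in SCHEMA.items():
            if tok in words:
                counts[cat] += 1
    total = len(tokens)
    return {cat: counts[cat] / total for cat in SCHEMA}

review = "great location but awful breakfast and high price".split()
props = category_proportions(review)
```

    Comparing such proportions across reviews from different countries is what allows the per-category differences to be tested statistically.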

    An Empirical Study of The Effect on Traffic of Large Online Promotion Activities

    This study selects multiple Web Analytics indicators to measure the volume and quality of traffic, and collects time series data on a certain brand's sales on JD.com from October 27, 2014 to June 30, 2015, using a Structural Time Series Model to analyze the traffic-attracting effect of five large-scale online promotion activities during this period. The case study results show that large-scale online promotion activities have a significant positive effect on total page traffic, but they differ in their effect on traffic quality; different activities affect the volume of unpaid traffic differently, while their effects on traffic quality are not significant. This analysis may help e-commerce sites develop better strategies for carrying out similar promotion activities.
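    The intervention logic behind such an analysis can be sketched minimally with a promotion dummy, whose coefficient in a regression on a constant reduces to a difference in means between promotion and non-promotion days. The daily traffic figures below are invented, and the paper itself fits a full Structural Time Series Model rather than this simplification.

```python
# Hypothetical daily page-traffic series; promo marks promotion days.
traffic = [100, 102, 98, 101, 150, 155, 148, 99, 103, 100]
promo   = [0,   0,   0,  0,   1,   1,   1,   0,  0,   0]

def intervention_effect(y, d):
    """Mean traffic shift on promotion days vs. non-promotion days
    (equivalent to the dummy coefficient in an OLS of y on [1, d])."""
    on = [yi for yi, di in zip(y, d) if di == 1]
    off = [yi for yi, di in zip(y, d) if di == 0]
    return sum(on) / len(on) - sum(off) / len(off)

effect = intervention_effect(traffic, promo)
```

    A structural time series model generalizes this by also estimating trend and seasonal components, so the promotion effect is not confounded with the series' own dynamics.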

    K-LLVM: A Relatively Complete Semantics of LLVM IR


    E-business Research in China over the Last Two Decades: a Bibliometric Analysis of Projects Granted by National Social Science Fund of China

    The fast growth of electronic business activities in China during the last two decades has attracted significant attention from practitioners as well as academics in different countries. The purpose of this paper is to draw the basic outline of e-business research over the last two decades in China, based on e-business research projects granted by the National Social Science Fund of China from 1999 to 2017. Changes in e-business research over time, subject distribution, geographical distribution, active research institutions, and high-frequency words are analyzed. The findings show that the research subjects on e-business in China over the last two decades can be classified into six categories: online consumer behavior, trust in e-business, internet business model innovation, rural electronic business, internet finance, and macro issues related to e-business. Research on e-business has attracted the attention of researchers from different disciplines, including management, economics, law, library science, sociology, statistics, sports, and journalism. The results give a clear image of the academic investigation of e-business over the last two decades in China.
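    The high-frequency-word analysis mentioned above can be sketched with a simple term count over project titles. The titles below are invented placeholders, not actual NSSFC grants.

```python
from collections import Counter

# Hypothetical project titles standing in for the granted-project corpus.
titles = [
    "online consumer behavior in rural electronic business",
    "trust in online business model innovation",
    "online consumer trust and internet finance",
]

# A tiny stopword list; a real bibliometric analysis would use a fuller one.
STOPWORDS = {"in", "and", "the", "of"}

tokens = [w for t in titles for w in t.split() if w not in STOPWORDS]
top = Counter(tokens).most_common(3)
```

    Clustering the resulting high-frequency terms is one common way such subject categories are derived in bibliometric studies.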

    Research on Customer Loyalty of Online Short-term Rental Service: A Meta-analysis

    Online short-term rental services have developed rapidly in recent years. Many scholars have focused on how to improve customer loyalty in online short-term rental services, but their conclusions often differ. We therefore built a comprehensive analysis to derive a unified conclusion. A meta-analysis was conducted on effect sizes extracted from 35 empirical articles about customer loyalty in online short-term rental services. The effect of loyalty classification was further explored through two sub-dimensions: behavioral loyalty and composite loyalty. The results of the main effect analysis show that only sustainability has no significant effect on attitude. The loyalty classification analysis confirms the validity and particularity of the results from the perspective of the sub-dimensions of loyalty. The conclusions of this study offer significant insights to academia and industry.
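    The core meta-analytic step can be sketched as fixed-effect pooling of correlation effect sizes under Fisher's z transform, with inverse-variance weights n - 3. The correlations and sample sizes below are invented, not the 35 studies the paper analyzes.

```python
import math

# Hypothetical (correlation, sample size) pairs from primary studies.
studies = [(0.45, 120), (0.30, 200), (0.55, 80)]

def pooled_correlation(studies):
    """Fixed-effect pooling: average Fisher-z values weighted by n - 3,
    then back-transform to a correlation."""
    num = den = 0.0
    for r, n in studies:
        z = math.atanh(r)  # Fisher z transform of the correlation
        w = n - 3          # inverse of the variance of z
        num += w * z
        den += w
    return math.tanh(num / den)

r_pooled = pooled_correlation(studies)
```

    Random-effects pooling, which additionally models between-study heterogeneity, follows the same pattern with adjusted weights.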

    A verification framework suitable for proving large language translations

    Previously, researchers established frameworks, such as Morpheus, to specify a compiler translation in a small language and prove the semantic preservation property of the translation under the assumption of sequential consistency. Based on the Morpheus specification language, we extend the verification framework to prove the semantic preservation property of compiler translations in a large real-world programming language with a real-world weak concurrency model. The framework combines four pieces. First, we specify a complete semantics of the K framework and a translation from K to Isabelle as our basis for defining language specifications and proving properties about them. Second, we define a complete operational semantics of LLVM in K, named K-LLVM, including specifications of all instructions and intrinsic functions in LLVM as well as the LLVM concurrency model. Third, to verify the correctness of the K-LLVM operational model, we create an axiomatic model, named the Hybrid Axiomatic Timed Relaxed Concurrency Model (HATRMM), which bridges the traditional C++ candidate execution models and the K-LLVM operational concurrency model. Finally, to enable the framework to prove semantic preservation under a relaxed memory model, we define a new simulation framework, named Per Location Simulation (PLS), which is suitable for that setting.

    Qunity: A Unified Language for Quantum and Classical Computing (Extended Version)

    We introduce Qunity, a new quantum programming language designed to treat quantum computing as a natural generalization of classical computing. Qunity presents a unified syntax where familiar programming constructs can have both quantum and classical effects. For example, one can use sum types to implement the direct sum of linear operators, exception handling syntax to implement projective measurements, and aliasing to induce entanglement. Further, Qunity takes advantage of the overlooked BQP subroutine theorem, allowing one to construct reversible subroutines from irreversible quantum algorithms through the uncomputation of "garbage" outputs. Unlike existing languages that enable quantum aspects with separate add-ons (like a classical language with quantum gates bolted on), Qunity provides a unified syntax along with a novel denotational semantics that guarantees that programs are quantum mechanically valid. We present Qunity's syntax, type system, and denotational semantics, showing how it can cleanly express several quantum algorithms. We also detail how Qunity can be compiled to a low-level qubit circuit language like OpenQASM, proving the realizability of our design. Comment: 60 pages, presented at QPL 202
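    The claim that sum types implement the direct sum of linear operators can be illustrated classically: the direct sum of two operators is a block-diagonal matrix acting on each summand independently. This is an illustrative plain-Python sketch, not Qunity code.

```python
def direct_sum(a, b):
    """Block-diagonal direct sum of two matrices (lists of lists):
    a acts on the 'left' summand, b on the 'right'."""
    ra, ca = len(a), len(a[0])
    rb, cb = len(b), len(b[0])
    out = [[0] * (ca + cb) for _ in range(ra + rb)]
    for i in range(ra):
        for j in range(ca):
            out[i][j] = a[i][j]
    for i in range(rb):
        for j in range(cb):
            out[ra + i][ca + j] = b[i][j]
    return out

# Direct sum of two 1x1 operators yields a 2x2 diagonal matrix.
m = direct_sum([[2]], [[3]])
```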

    Protecting Memories against Soft Errors: The Case for Customizable Error Correction Codes

    As technology scales, radiation-induced soft errors create more complex error patterns in memories, with a single particle corrupting several bits. This poses a challenge to the Error Correction Codes (ECCs) traditionally used to protect memories, which can correct only single-bit errors. During the last decade, a number of codes have been developed to correct the emerging error patterns, focusing initially on double adjacent errors and later on three-bit burst errors. However, as memory cells get smaller and smaller, the error patterns created by radiation will continue to change, and thus new codes will be needed. In addition, the memory layout and the technology used may also make some patterns more likely than others. For example, in some memories there may be elements that separate blocks of bits in a word, making errors that affect two blocks less likely. Finally, for a given memory, depending on the data stored, some error patterns may be more critical than others. For example, if numbers are stored in the memory, errors on the more significant bits usually have a larger impact. Therefore, for a given memory and application, to achieve optimal protection, we would like a code that corrects a given set of patterns. This is not possible today, as there is a limited number of code choices available in terms of correctable error patterns and word lengths. However, most of the codes used to protect memories are linear block codes that have a regular structure and whose design can be automated. In this paper, we propose the automation of error correction code design for memory protection. To that end, we introduce a software tool that, given a word length and the error patterns that need to be corrected, produces a linear block code described by its parity check matrix, along with the bit placement. The benefits of this automated design approach are illustrated with several case studies.
    Finally, the tool is made available so that designers can easily produce custom error correction codes for their specific needs. Jiaqiang Li and Liyi Xiao would like to acknowledge the support of the Fundamental Research Funds for the Central Universities (Grant No. HIT.KISTP.201404), the Harbin science and innovation research special fund (2015RAXXJ003), and the Special fund for development of Shenzhen strategic emerging industries (JCYJ20150625142543456). Pedro Reviriego would like to acknowledge the support of the TEXEO project TEC2016-80339-R, funded by the Spanish Ministry of Economy and Competitiveness, and of the Madrid Community research project TAPIR-CM, Grant No. P2018/TCS-4496.
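    The single-bit-correction baseline that such codes extend can be sketched with a Hamming(7,4)-style parity-check matrix and syndrome decoding. The matrix below is the standard textbook construction, not output of the paper's tool.

```python
# Parity-check matrix H for a Hamming(7,4) code: column j is the
# binary representation of j + 1, so a single-bit error's syndrome
# directly names the flipped position.
H = [[(j + 1) >> b & 1 for j in range(7)] for b in range(3)]

def syndrome(word):
    """Compute the 3-bit syndrome of a 7-bit word under H (mod 2)."""
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

def correct_single_error(word):
    """Flip back a single erroneous bit; the syndrome, read as a
    binary number, gives the 1-based error position (0 = no error)."""
    s = syndrome(word)
    pos = s[0] + 2 * s[1] + 4 * s[2]
    if pos:
        word = word[:]
        word[pos - 1] ^= 1
    return word

codeword = [0, 0, 0, 0, 0, 0, 0]  # the all-zero word is a valid codeword
received = codeword[:]
received[4] ^= 1                  # inject a single-bit error
fixed = correct_single_error(received)
```

    The paper's tool generalizes this idea: instead of the fixed Hamming columns, it searches for a parity-check matrix whose syndromes distinguish exactly the user-specified set of error patterns.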